There's a famous not-actually-a-theorem in computer science, credited to David Wheeler and quoted at conferences by people who think academic phrasing makes software insights sound profound:

"We can solve any problem by introducing an extra level of indirection."

And its punchline, which everyone conveniently forgets to mention:

"…except for the problem of too many levels of indirection."

This joke has survived for decades because every developer has lived it. You're debugging something, three levels deep, bouncing between files, and you've completely lost track of what you were originally looking for. AI coding assistants, weirdly, have made this both better and worse. Claude Code can follow these rabbit holes without getting lost; it doesn't share our human limits on working memory. But that just means you can build even more elaborate mazes before anyone notices they're unnavigable.

The real issue is that we conflate two different concepts when we talk about "splitting things apart." One is layers — conceptual boundaries that separate concerns by purpose. The other is levels of indirection — breaking code into smaller units that call each other. They're related, but confusing them is how you end up with architectures that look beautiful on whiteboards and make you want to quit programming when you have to debug them.

The Three Layers Everyone (Sort Of) Agrees On

You know this pattern. Controller, service, repository. Presentation, business logic, data access. MVC if you're feeling retro. The vocabulary changes — there are so many vocabularies — but the idea persists: separate code by its role in the system.

Controller handles the outside world. HTTP requests, queue messages, GraphQL queries. Does some validation, maybe formatting, hands the work off. Business layer orchestrates the actual logic. Data layer talks to databases and external services.
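
In code, the skeleton looks something like this. A minimal sketch with hypothetical names, not any particular framework's API:

```python
class UserRepository:
    """Data layer: translates between the database and the rest of the app."""

    def __init__(self, db):
        self.db = db

    def find_by_id(self, user_id):
        return self.db.query("SELECT * FROM users WHERE id = ?", user_id)


class UserService:
    """Business layer: orchestrates whatever the operation actually needs."""

    def __init__(self, repository):
        self.repository = repository

    def get_profile(self, user_id):
        user = self.repository.find_by_id(user_id)
        if user is None:
            raise LookupError(f"no user {user_id}")
        return user


class UserController:
    """Presentation layer: parses the request, delegates, formats the reply."""

    def __init__(self, service):
        self.service = service

    def handle_get_user(self, request):
        user = self.service.get_profile(request["user_id"])
        return {"status": 200, "body": user}
```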

Three neat boxes. Clean separation of concerns.

Anyone who's built anything beyond a to-do app knows what happens next.

The Fat Middle Problem

The business logic layer becomes the junk drawer of your application. It fetches data from this service, correlates it with that database, applies these rules, makes those API calls, handles transactions, stores results. This is procedural work — a sequence of actions that must happen in order — and it accumulates relentlessly.
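
Condensed into a sketch (hypothetical names, but a familiar shape), the junk drawer reads like a recipe:

```python
class OrderService:
    """The fat middle: every step of the business process, in order."""

    def __init__(self, user_repo, order_repo, inventory_repo,
                 pricing_api, rules, billing, notifier, db):
        # One constructor argument per thing the business "needs".
        self.user_repo, self.order_repo = user_repo, order_repo
        self.inventory_repo, self.pricing_api = inventory_repo, pricing_api
        self.rules, self.billing = rules, billing
        self.notifier, self.db = notifier, db

    def place_order(self, user_id, cart):
        user = self.user_repo.find_by_id(user_id)             # fetch from this service
        prices = self.pricing_api.quote(cart.items)           # correlate with that one
        discount = self.rules.apply_discounts(user, prices)   # apply these rules
        charge = self.billing.charge(user, prices, discount)  # make those API calls
        with self.db.transaction():                           # handle the transaction
            order = self.order_repo.create(user, cart, charge)
            self.inventory_repo.reserve(cart.items)           # store results
        self.notifier.send_confirmation(user, order)
        return order
```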

The controller stays thin because its job is narrow. The data layer stays thin because it's basically a translator. But the business layer? Its responsibility is "do the thing the business needs." And businesses need everything.

So it grows. And eventually someone looks at a 500-line service class and says what we all say: "This needs to be broken up."

They're right. But how you break it up determines whether you solve the problem or just distribute it more creatively.

When Breaking Up Creates the Maze

When you split that fat service, you're usually not adding a new layer — there's no new conceptual boundary with a distinct purpose. You're adding levels of indirection: smaller pieces calling each other within the same conceptual space.

Class A becomes A1, which calls A2. A2 grows and becomes A2a and A2b. Each piece looks focused and testable. The whole thing becomes a navigation nightmare.
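
Sketched in code, the split looks like this. The names are hypothetical, and note that nothing about the logic changed; it just moved:

```python
class OrderPlacer:                     # what's left of A (call it A1)
    def __init__(self, preparer, finalizer):
        self.preparer, self.finalizer = preparer, finalizer

    def place_order(self, user_id, cart):
        prepared = self.preparer.prepare(user_id, cart)    # hop to A2
        return self.finalizer.finalize(prepared)


class OrderPreparer:                   # A2, which later split again
    def __init__(self, pricer, discounter):
        self.pricer, self.discounter = pricer, discounter

    def prepare(self, user_id, cart):
        priced = self.pricer.price(cart)                   # hop to A2a
        return self.discounter.discount(user_id, priced)   # hop to A2b
```

Each class passes its unit tests in isolation. None of them answers the question a reader actually has: what happens when an order is placed?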

I've been watching Claude Code work through these codebases, and it's fascinating. It can follow call chains across dozens of files without losing context. It remembers which UserManager calls which NotificationService calls which EmailProvider. But here's the thing: when it tries to make changes to these scattered systems, it often gets the relationships wrong. Not because it can't follow the maze, but because the maze itself doesn't reflect coherent business logic.

The AI reveals something we should have known: if an assistant that never forgets still can't work with your codebase safely, the problem isn't cognitive — it's architectural.

The Repository Pattern Trap

Let me show you this concretely. You start with the repository pattern — one repository per database table, basic CRUD operations.
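
Something like this, to keep the example grounded (a hypothetical sketch, raw SQL for brevity):

```python
class UserRepository:
    """One repository per table, speaking pure CRUD."""

    def __init__(self, db):
        self.db = db

    def get(self, user_id):
        return self.db.query("SELECT * FROM users WHERE id = ?", user_id)

    def update(self, user_id, **fields):
        # A generic column update; it has no idea what the fields mean.
        assignments = ", ".join(f"{name} = ?" for name in fields)
        self.db.execute(f"UPDATE users SET {assignments} WHERE id = ?",
                        *fields.values(), user_id)
```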

But business operations don't speak CRUD. Nobody says "update the user's is_active field to false." They say "deactivate the user." And deactivation involves updating status, recording timestamps, triggering notifications, adjusting billing state.

Where does this logic live? Put it in the service layer, and the service needs intimate knowledge of multiple repositories. The service gets fat again. Introduce an intermediate "manager" that wraps repositories and speaks business language, and now you have controller → service → manager → repository → database. Four hops through five pieces for what should be a straightforward operation.
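
Traced end to end, with hypothetical names, the journey for a single business operation looks like this:

```python
from datetime import datetime, timezone


class UserController:
    def __init__(self, service):
        self.service = service

    def handle_deactivate(self, request):                # piece 1: controller
        self.service.deactivate_user(request["user_id"])
        return {"status": 204}


class UserService:
    def __init__(self, manager):
        self.manager = manager

    def deactivate_user(self, user_id):                  # piece 2: service...
        self.manager.deactivate(user_id)                 # ...which just forwards


class UserManager:
    """Wraps repositories and 'speaks business language'."""

    def __init__(self, users, audit, billing):
        self.users, self.audit, self.billing = users, audit, billing

    def deactivate(self, user_id):                       # piece 3: manager
        now = datetime.now(timezone.utc)
        self.users.update(user_id, is_active=False,      # piece 4: repository,
                          deactivated_at=now)            # piece 5: the database
        self.audit.record("user.deactivated", user_id)
        self.billing.suspend(user_id)
```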

And what do you call this manager thing? "UserManager" tells you nothing except that it sits between other things. The vagueness is a symptom: this level of indirection doesn't correspond to any real conceptual boundary.

I've been experimenting with having AI generate different architectural approaches for the same feature. When I ask it to build "deactivate user" functionality five different ways, the scattered approaches consistently produce more complex implementations than the fat-class versions. The AI can implement the complexity, but it can't resolve the fundamental question: are you organizing by data shape or by business operation?

How We Actually Navigate Code

Here's the deeper issue: when you understand a codebase, you don't think in file paths. You think in conceptual relationships. You know that "discount logic lives somewhere near pricing" without remembering exact filenames. You navigate by mental model, not folder structure.

Simon Brown's C4 model gets this right with its map analogy. Planning a road trip? You need the country view. Finding a restaurant? You need street detail. Both useful, neither replaceable. You zoom between levels dynamically.

Code organization, by contrast, is flat. Files in folders. Your IDE's "go to definition" bounces you between files with no sense of the larger story. The file system doesn't know that user_manager.py and notification_service.py are intimately connected in the deactivation flow.

This is where AI assistants show their limitations. They can read your entire codebase and understand the relationships, but they can't fix poor conceptual organization. If your mental model of "how user deactivation works" requires jumping between seven files, an AI can navigate those seven files perfectly — but it can't tell you that maybe the real problem is having seven files in the first place.

What Actually Helps

I don't think there's a clean answer, but some principles seem worth holding onto:

Every level of indirection should earn its existence. It should represent a genuinely distinct concept, not just a smaller chunk. If you can't explain why this is separate without saying "it was getting too big," you're probably just moving complexity around.

Keep levels minimal and dynamic. Cognitive science puts working memory at a handful of chunks, not dozens. Use the fewest levels that make sense for this specific complexity, not a number from a pattern book.

Match your conceptual taxonomy to your file taxonomy. If your folder structure doesn't reflect how people think about the code, fix the structure.

Trace the debugging journey. Before adding indirection, follow the path someone will take six months from now, trying to understand why something isn't working. If they'll be lost by the third hop, you're not helping.
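
For contrast with the five-piece chain above, here's what the deactivation flow might look like when the module boundary follows the business operation instead of the table layout. A sketch under the same hypothetical names, not a prescription:

```python
class UserDeactivation:
    """One conceptual unit: everything 'deactivate user' means, in one file."""

    def __init__(self, db, billing, notifier):
        self.db, self.billing, self.notifier = db, billing, notifier

    def deactivate(self, user_id):
        # The whole debugging journey starts and ends here.
        with self.db.transaction():
            self.db.execute(
                "UPDATE users SET is_active = FALSE, deactivated_at = NOW() "
                "WHERE id = ?", user_id)
            self.db.execute(
                "INSERT INTO audit_log (event, user_id) VALUES (?, ?)",
                "user.deactivated", user_id)
        self.billing.suspend(user_id)
        self.notifier.send_deactivation_notice(user_id)
```

There's still indirection between the controller and the SQL, but it's a level that corresponds to a concept someone would actually name.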

The dream is visual debugging — something that shows program flow like Factorio shows factory chains. Boxes and flows, zoomable from system overview to function details. We process spatial relationships effortlessly when they're visual instead of textual.

With AI getting better at code understanding, maybe this is less impossible than it seemed. In the meantime, we navigate the maze. The least we can do is be honest about the cost of each wall we add.